
# Self-Attention Optimization

## Cursa O1 7b V1.1
A pre-trained language model created by fusing pre-cursa-o1-v1.2 and post-cursa-o1 with the SLERP merge method, combining the strengths of both parent models.
Large Language Model · Transformers
By marcuscedricridia
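
SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which preserves the norm structure of the weights better than plain averaging. Below is a minimal sketch of the per-tensor operation, assuming PyTorch; the function name `slerp`, the interpolation factor `t`, and the fallback to linear interpolation for nearly parallel tensors are illustrative choices, not the model authors' published code.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between tensors a and b at factor t in [0, 1]."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_norm = a_flat / (a_flat.norm() + eps)
    b_norm = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    dot = torch.clamp(torch.dot(a_norm, b_norm), -1.0, 1.0)
    omega = torch.acos(dot)
    # Nearly parallel vectors: fall back to plain linear interpolation.
    if omega.abs() < 1e-4:
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    sin_omega = torch.sin(omega)
    coeff_a = torch.sin((1.0 - t) * omega) / sin_omega
    coeff_b = torch.sin(t * omega) / sin_omega
    return (coeff_a * a_flat + coeff_b * b_flat).reshape(a.shape).to(a.dtype)
```

At `t = 0` the result is the first model's weights and at `t = 1` the second's; intermediate values trace the arc between them.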
## Blockchainlabs 7B Merged Test2 4
blockchainlabs_7B_merged_test2_4 is a 7B-parameter large language model created by merging mlabonne/NeuralBeagle14-7B and udkai/Turdus using the mergekit tool.
Large Language Model · Transformers
By alnrg2arg
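
mergekit itself is driven by a declarative YAML config and a CLI, but the underlying idea is a key-by-key blend of the two parents' state dicts. The sketch below illustrates that idea in plain PyTorch using the Hugging Face model IDs from the entry above and the `slerp` helper from the previous sketch; the `t = 0.5` blend factor and the output path are hypothetical, not the published merge recipe.

```python
import torch
from transformers import AutoModelForCausalLM

# Load both 7B parents; they share an architecture, so state dicts align key-by-key.
model_a = AutoModelForCausalLM.from_pretrained("mlabonne/NeuralBeagle14-7B")
model_b = AutoModelForCausalLM.from_pretrained("udkai/Turdus")

state_a, state_b = model_a.state_dict(), model_b.state_dict()

with torch.no_grad():
    for name, tensor_a in state_a.items():
        # Blend each parameter pair with SLERP at t = 0.5 (illustrative choice).
        state_a[name] = slerp(0.5, tensor_a, state_b[name])

# Reuse one parent as the container for the merged weights, then save.
model_a.load_state_dict(state_a)
model_a.save_pretrained("./merged-7b")  # hypothetical output path
```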